Version: Single Wave - Interactive Notebook.
BASS: Biomedical Analysis Software Suite for event detection and signal processing.
Copyright (C) 2015 Abigail Dobyns
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
Run the following code block to initialize the program. This notebook and the bass.py file must be in the same folder.
In [ ]:
from bass import *
For help, check out the wiki: Protocol
Or the video tutorial: Coming Soon!
Use the following block to change your settings. You must use this block.
Here is some helpful information about the loading settings:
Full File Path to Folder containing file: Designate the path to your file to load. It can also be the relative path to the folder where this notebook is stored. This does not include the file itself.
Mac OSX Example: '/Users/MYNAME/Documents/bass'
Microsoft Example: 'C:\\Users\\MYNAME\\Documents\\bass'
File name: This is the name of your data file, including the file extension. This file should NOT have a header, and the first column must be time in seconds. Note: this file name will also appear as part of the output file names.
'rat34_ECG.txt'
Full File Path for data output: Designate the location of the folder where you would like the folder containing your results to go. If the folder does not exist, then it will be created. A plots folder, called 'plots' will be created inside this folder for you if it does not already exist.
Mac OSX Example: '/Users/MYNAME/Documents/output'
Microsoft Example: 'C:\\Users\\MYNAME\\Documents\\output'
In [ ]:
Data, Settings, Results = load_interact()
Use this block to check whether you need to slice problematic data from the head or tail of the wave. You can click on any point in the wave to get the (x, y) location of that point. Clipping inside this notebook is not supported at this time.
In [ ]:
plot_rawdata(Data)
Use the settings code block below to set the frequency bands used to calculate area under the curve. This block is not required. Band output is always in raw power, even if the graph scale is dB/Hz.
In [ ]:
#optional
Settings['PSD-Signal'] = Series(index = ['ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Signal']['ULF'] = 25 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Signal']['VLF'] = 75 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Signal']['LF'] = 150 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Signal']['HF'] = 300 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2) where hz is the sampling frequency
Settings['PSD-Signal']['dx'] = 2 #segmentation for integration of the area under the curve.
Use the block below to generate the PSD graph and power in bands results (if selected). scale toggles which units to use for the graph:
raw = s^2/Hz
db = dB/Hz = 10*log10(s^2/Hz)
Graph and table are automatically saved in the PSD-Signal subfolder.
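The relation between the two scales is just a logarithmic transform; a minimal sketch, independent of BASS, with placeholder values:

```python
import numpy as np

# Hypothetical raw PSD values in s^2/Hz (placeholder data, not BASS output)
psd_raw = np.array([1.0, 10.0, 100.0])

# dB/Hz is 10*log10 of the raw power, so each factor of 10 adds 10 dB
psd_db = 10 * np.log10(psd_raw)
```

Note that power in band is integrated from the raw values, so switching the graph to dB does not change the band results.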
In [ ]:
scale = 'raw' #raw or db
Results = psd_signal(version = 'original', key = 'Mean1', scale = scale,
Data = Data, Settings = Settings, Results = Results)
Results['PSD-Signal']
Use the block below to get the spectrogram of the signal. The frequency (y-axis) scales automatically to only show 'active' frequencies. This can take some time to run.
version = 'original'
key = 'Mean1'
After transformation is run, you can call version = 'trans'. This graph is not automatically saved.
In [ ]:
version = 'original'
key = 'Mean1'
spectogram(version, key, Data, Settings, Results)
This must be done for each newly uploaded data file.
WARNING: If you do not load a settings file OR enter your own settings, the analysis will not run. There are no defaults. This section is not optional.
Must be a previously output BASS settings file, although the name can be changed. The expected format is '.csv'. Enter the full file path and name.
Mac OSX Example: '/Users/MYNAME/Documents/rat34_Settings.csv'
Microsoft Example: 'C:\\Users\\MYNAME\\Documents\\rat34_Settings.csv'
See above instructions for how to load your data file.
Warning: you must load a settings file or specify your settings below. There are no defaults.
In [ ]:
Settings = load_settings_interact(Settings)
Settings_display = display_settings(Settings)
Settings_display
WARNING: If you do not load a settings file OR enter your own settings, the analysis will not run. There are no defaults. This section is not optional.
Enter the parameters of the functions you would like to use to transform your data. If you do not want to use a function, enter 'none'.
For more help on settings:
In [ ]:
Settings = user_input_trans(Settings)
In [ ]:
Data, Settings = transform_wrapper(Data, Settings)
graph_ts(Data, Settings, Results)
WARNING: If you do not load a settings file OR enter your own settings, the analysis will not run. There are no defaults. This section is not optional.
Linear - takes a user-specified time segment as a good representation of baseline. If the superstructure is linear but has a slope, use the linear fit transformation so that Linear can still be used. Linear automatically shifts your data by the amount of your baseline, normalizing the baseline to zero.
Rolling - a rolling mean of the data is generated based on a moving window. The user provides the window size in milliseconds. There is no shift in the data with this method.
Static - skips baseline generation and allows you to choose an arbitrary y value as the threshold. No shift in the data.
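The three approaches can be illustrated with a toy pandas sketch (placeholder data and window sizes, not the BASS internals; BASS takes the rolling window in milliseconds):

```python
import numpy as np
import pandas as pd

t = np.arange(0, 10, 0.1)                     # time in seconds, 0.1 s sampling
signal = pd.Series(np.sin(t) + 2.0, index=t)  # toy wave riding on a baseline of ~2

# Linear: mean of a user-chosen quiet segment, then shift so the baseline is zero
baseline = signal.iloc[:20].mean()            # first 2 s chosen as the baseline segment
shifted = signal - baseline

# Rolling: a moving-window mean tracks a drifting baseline; the data is NOT shifted
rolling_baseline = signal.rolling(window=10, min_periods=1).mean()  # 10-sample (1 s) window

# Static: skip baseline estimation and pick an arbitrary y value as the threshold
static_threshold = 2.5
```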
In [ ]:
Settings = user_input_base(Settings)
In [ ]:
Data, Settings, Results = baseline_wrapper(Data, Settings, Results)
graph_ts(Data, Settings, Results)
In [ ]:
Settings_display = display_settings(Settings)
Settings_display
Peaks are local maxima, defined by local minima on either side of them. Click here for more information about this algorithm.
Run the following block of code to enter or change peak detection settings. If you have loaded settings from a previous file, you do not need to run this block.
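The idea behind delta-based peak detection can be sketched in a few lines (in the spirit of the classic peakdet routine; this is an illustration, not BASS's exact implementation):

```python
import numpy as np

def peakdet(v, delta):
    """Return indices of local maxima that rise at least `delta` above the
    surrounding local minima, and of the local minima between them."""
    maxima, minima = [], []
    mn, mx = np.inf, -np.inf
    mnpos = mxpos = 0
    look_for_max = True
    for i, x in enumerate(v):
        if x > mx:
            mx, mxpos = x, i
        if x < mn:
            mn, mnpos = x, i
        if look_for_max:
            if x < mx - delta:          # dropped far enough: mx was a real peak
                maxima.append(mxpos)
                mn, mnpos = x, i
                look_for_max = False
        else:
            if x > mn + delta:          # rose far enough: mn was a real valley
                minima.append(mnpos)
                mx, mxpos = x, i
                look_for_max = True
    return maxima, minima

wave = np.sin(np.linspace(0, 4 * np.pi, 200))   # two full cycles
peaks, valleys = peakdet(wave, 0.5)             # 2 peaks, 2 valleys
```

The delta threshold is what keeps small noise wiggles from being counted as events.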
In [ ]:
Settings = event_peakdet_settings(Data, Settings)
In [ ]:
Results = event_peakdet_wrapper(Data, Settings, Results)
Results['Peaks-Master'].groupby(level=0).describe()
Use the block below to visualize event detection. Peaks are blue triangles. Valleys are pink triangles.
In [ ]:
graph_ts(Data, Settings, Results)
In [ ]:
Settings = event_burstdet_settings(Data, Settings, Results)
In [ ]:
Results = event_burstdet_wrapper(Data, Settings, Results)
Results['Bursts-Master'].groupby(level=0).describe()
In [ ]:
key = 'Mean1'
graph_ts(Data, Settings, Results, key)
In [ ]:
Save_Results(Data, Settings, Results)
Now that events are detected, you can analyze them using any of the optional blocks below.
More information about how to use this
In [ ]:
#grouped summary for peaks
Results['Peaks-Master'].groupby(level=0).describe()
In [ ]:
#grouped summary for bursts
Results['Bursts-Master'].groupby(level=0).describe()
In [ ]:
#Batch
event_type = 'Peaks'
meas = 'all'
Results = poincare_batch(event_type, meas, Data, Settings, Results)
pd.concat({'SD1':Results['Poincare SD1'],'SD2':Results['Poincare SD2']})
In [ ]:
#quick
event_type = 'Bursts'
meas = 'Burst Duration'
key = 'Mean1'
poincare_plot(Results[event_type][key][meas])
In [ ]:
key = 'Mean1'
start =100 #start time in seconds
end= 101 #end time in seconds
results_timeseries_plot(key, start, end, Data, Settings, Results)
In [ ]:
#autocorrelation
key = 'Mean1'
start = 0 #seconds, where you want the slice to begin
end = 10 #seconds, where you want the slice to end.
autocorrelation_plot(Data['trans'][key][start:end])
plt.show()
In [ ]:
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1' #'Mean1' default for single wave
frequency_plot(event_type, meas, key, Data, Settings, Results)
The following blocks allow you to assess the power of event measurements in the frequency domain. While you can call this block on any event measurement, it is intended to be used on interval data (or at least data with units in seconds). Recommended:
event_type = 'Bursts'
meas = 'Total Cycle Time'
key = 'Mean1'
scale = 'raw'
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
scale = 'raw'
Because this data is event-based, it is unevenly sampled in time, so we must interpolate it onto an even grid in order to perform an FFT on it. Does not support 'all'.
Use the code block below to specify your settings for the event measurement PSD.
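The interpolate-then-PSD idea can be sketched with synthetic event data (hypothetical event times; scipy's Welch estimator is used here as a stand-in for the BASS internals):

```python
import numpy as np
from scipy.signal import welch

hz = 4.0  # even resampling rate, playing the role of Settings['PSD-Event']['hz']

# Hypothetical event times (s) with ~1 s spacing, and the interval at each event
rng = np.random.default_rng(0)
event_times = np.cumsum(rng.uniform(0.7, 1.3, size=120))
intervals = np.diff(event_times, prepend=0.0)

# Interpolate the unevenly spaced intervals onto an even grid so an FFT applies
grid = np.arange(event_times[0], event_times[-1], 1.0 / hz)
even = np.interp(grid, event_times, intervals)

# Welch PSD of the resampled series; frequencies run from 0 up to hz/2
freqs, psd = welch(even, fs=hz, nperseg=min(256, len(even)))
```

This is why hf in the settings below can be no more than hz/2: that is the Nyquist limit of the resampled series.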
In [ ]:
Settings['PSD-Event'] = Series(index = ['hz', 'ULF', 'VLF', 'LF', 'HF', 'dx'])
#Set PSD ranges for power in band
Settings['PSD-Event']['hz'] = 4.0 #frequency that the interpolation and PSD are performed with.
Settings['PSD-Event']['ULF'] = 0.03 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Event']['VLF'] = 0.05 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Event']['LF'] = 0.15 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Event']['HF'] = 0.4 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2)
Settings['PSD-Event']['dx'] = 10 #segmentation for the area under the curve.
Use block below to return the PSD plot, as well as the power in the bands defined by the settings above.
In [ ]:
event_type = 'Bursts'
meas = 'Total Cycle Time'
key = 'Mean1'
scale = 'raw'
Results = psd_event(event_type, meas, key, scale, Data, Settings, Results)
Results['PSD-Event'][key]
In [ ]:
#Get average plots, display only
event_type = 'peaks'
meas = 'Peaks Amplitude'
average_measurement_plot(event_type, meas,Results)
Generates the moving mean, standard deviation, and count for a given measurement across all columns of the Data in the form of a DataFrame (displayed as a table). Saves out the dataframes of these three results automatically with the window size in the name as a .csv. If meas == 'All', then the function will loop and produce these tables for all measurements.
event_type = 'Peaks'
meas = 'all'
window = 30
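Mechanically, this amounts to binning events into fixed-width time windows and summarizing each bin; a sketch with placeholder event data (not the BASS implementation, and nothing is saved here):

```python
import numpy as np
import pandas as pd

window = 30  # seconds

# Hypothetical event times (s) and amplitudes, standing in for one Results column
rng = np.random.default_rng(1)
times = np.arange(0, 300, 0.8)
amps = pd.Series(rng.normal(5.0, 1.0, size=len(times)), index=times)

# Assign each event to its 30 s window, then summarize every window
bins = (times // window).astype(int)
moving = amps.groupby(bins).agg(['mean', 'std', 'count'])
```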
In [ ]:
#Moving Stats
event_type = 'Peaks'
meas = 'all'
window = 30 #seconds
Results = moving_statistics(event_type, meas, window, Data, Settings, Results)
Calculates the histogram entropy of a measurement for each column of data. Also saves the histogram of each. If meas is set to 'all', then all available measurements from the chosen event_type will be calculated iteratively.
If all of the samples fall in one bin regardless of the bin size, we have the most predictable situation and the entropy is 0. For a uniformly distributed function, the maximum entropy is 1.
event_type = 'Bursts'
meas = 'all'
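The normalized histogram entropy described above can be sketched as follows (an illustration of the 0-to-1 scale, not BASS's histent internals; bin count is an arbitrary choice here):

```python
import numpy as np

def hist_entropy(x, bins=10):
    """Shannon entropy of a histogram, normalized to [0, 1] by log2(bins)."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                          # treat 0*log(0) as 0
    return -np.sum(p * np.log2(p)) / np.log2(bins)

constant = hist_entropy(np.ones(100))            # all samples in one bin -> 0
uniform = hist_entropy(np.linspace(0, 1, 1000))  # evenly spread samples -> 1
```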
In [ ]:
#Histogram Entropy
event_type = 'Bursts'
meas = 'all'
Results = histent_wrapper(event_type, meas, Data, Settings, Results)
Results['Histogram Entropy']
You can run another file by going back to the Begin User Input section and choosing another file path.
This only runs if you have pyeeg.py in the same folder as this notebook and bass.py. WARNING: THIS FUNCTION RUNS SLOWLY.
Run the code below to get the approximate entropy of any measurement or raw signal. It returns the entropy of the entire results array (no windowing), using the following M and R values:
M = 2
R = 0.2*std(measurement)
These values can be modified in the source code. Alternatively, you can call ap_entropy directly. Supports 'all'.
Interpretation: A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn.
In [ ]:
#Approximate Entropy
event_type = 'Peaks'
meas = 'all'
Results = ap_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Approximate Entropy']
In [ ]:
#Approximate Entropy on raw signal
#takes a VERY long time
from pyeeg import ap_entropy
version = 'original' #original, trans, shift, or rolling
key = 'Mean1' #Mean1 default key for one time series
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end. The absolute end is -1
ap_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))
This only runs if you have pyeeg.py in the same folder as this notebook and bass.py. WARNING: THIS FUNCTION RUNS SLOWLY.
Run the code below to get the sample entropy of any measurement. It returns the entropy of the entire results array (no windowing), using the following M and R values:
M = 2
R = 0.2*std(measurement)
These values can be modified in the source code. Alternatively, you can call samp_entropy directly. Supports 'all'.
In [ ]:
#Sample Entropy
event_type = 'Bursts'
meas = 'all'
Results = samp_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Sample Entropy']
In [ ]:
#on raw signal
#takes a VERY long time
version = 'original' #original, trans, shift, or rolling
key = 'Mean1' #Mean1 default key for one time series
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end. The absolute end is -1
samp_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))
You're still here, reading? You must be a dedicated super user!
If that is the case, then you must know how to code in Python. Use this space to get crazy with your own advanced analysis and stuff.
In [ ]: